feat: add direct OPA evaluator with shared base infrastructure#3276
joejstuart wants to merge 9 commits into conforma:main from
Conversation
📝 Walkthrough

Adds a functional OPA-based policy evaluator and base evaluator framework, enables in-memory input construction for image snapshots, wires the OPA evaluator into validation entry points and fallback paths, and introduces unit/integration/comparison tests plus BDD scenarios exercising OPA evaluation.

Changes: OPA Evaluator Implementation
sequenceDiagram
actor User
participant CLI as CLI
participant ValidateCmd as cmd/validate/image.go
participant AppSnapshot as ApplicationSnapshotImage
participant OPAEvaluator as NewOPAEvaluator
participant BaseEval as basePolicyEvaluator
participant OPAEngine as conftest.Engine
User->>CLI: ec validate image --policy <policy>
CLI->>ValidateCmd: invoke validate
ValidateCmd->>AppSnapshot: BuildInput(ctx)
AppSnapshot-->>ValidateCmd: ParsedInput + input JSON
ValidateCmd->>OPAEvaluator: NewOPAEvaluator(ctx, policySources, ...)
OPAEvaluator->>BaseEval: init resolver & workspace
ValidateCmd->>OPAEvaluator: Evaluate(ctx, EvaluationTarget{ParsedInput, Inputs})
OPAEvaluator->>OPAEvaluator: ensureInitialized (download/inspect)
OPAEvaluator->>OPAEngine: compile engine with policies & data
OPAEvaluator->>OPAEngine: evaluate namespace queries (exceptions first)
OPAEngine-->>OPAEvaluator: query results
OPAEvaluator->>BaseEval: postProcessResults (enrich, filter, synthesize successes)
BaseEval-->>ValidateCmd: Outcomes (failures/warnings/passes)
ValidateCmd-->>User: JSON report + exit code
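The ensureInitialized step in the diagram runs once even when several worker goroutines call Evaluate concurrently. A minimal sketch of that lazy-init pattern with sync.Once (the type and field names here are illustrative, not the project's actual ones):

```go
package main

import (
	"fmt"
	"sync"
)

// opaEvaluator is a hypothetical stand-in showing the lazy-init pattern
// the PR describes for the real evaluator.
type opaEvaluator struct {
	initOnce sync.Once
	initErr  error
	compiled bool // stands in for the compiled policy engine
}

// ensureInitialized performs the expensive one-time work (download sources,
// compile engine) exactly once, even under concurrent callers.
func (o *opaEvaluator) ensureInitialized() error {
	o.initOnce.Do(func() {
		o.compiled = true
	})
	return o.initErr
}

func main() {
	e := &opaEvaluator{}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = e.ensureInitialized()
		}()
	}
	wg.Wait()
	fmt.Println(e.compiled)
}
```

sync.Once guarantees the init body runs at most once and that all callers observe its writes, which is what removes the data race between worker goroutines.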
Estimated code review effort🎯 5 (Critical) | ⏱️ ~120 minutes 🚥 Pre-merge checks | ✅ 4 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 2
🧹 Nitpick comments (2)
internal/image/validate.go (1)
109-119: ⚡ Quick win: Avoid building the same policy input twice per image.
`BuildInput` already marshals/unmarshals the full payload, and `WriteInputFile` then marshals it again immediately afterward. On the conftest path this is pure overhead in a hot loop, and it also creates two places that must stay semantically identical. Prefer a single shared builder, or only populate `ParsedInput` when an OPA evaluator is actually present.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/image/validate.go` around lines 109-119, the code currently calls BuildInput and then WriteInputFile, causing duplicate marshaling; instead call BuildInput once and reuse its result when writing files or populating ParsedInput. Modify WriteInputFile (or add an overload) to accept the already-built inputJSON/inputMap from BuildInput so it does not re-marshal, and only populate a.ParsedInput (or run the OPA/conftest path) when an OPA evaluator/conftest is actually present; update callers to pass the BuildInput output into WriteInputFile or skip WriteInputFile when no evaluator is used (referencing BuildInput, WriteInputFile, and ParsedInput in your changes).

internal/evaluator/opa_evaluator.go (1)
243-267: ⚡ Quick win: Cache rule discovery per namespace after compilation.
This rescans every compiled module for every (namespace, input) pair, then does an O(n²) dedupe on top. With many inputs, rule discovery becomes part of the hot path even though `o.engine.Modules()` is immutable after `compileEngine()`. Precomputing a `map[string][]string` once during initialization would remove that repeated AST walk and simplify `queryNamespace`. As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/evaluator/opa_evaluator.go` around lines 243 - 267, Precompute and cache discovery of OPA failure/warning rule names per namespace during engine initialization (e.g., in compileEngine or immediately after it) instead of recomputing inside queryNamespace by walking o.engine.Modules() each time; build a map[string][]string (namespace -> ruleNames) using the same filtering logic (isOPAFailure/isOPAWarning and dedupe) and store it on the evaluator struct, update usages of ruleNames/ruleCount in queryNamespace to read from that cache, and ensure cache is populated once after compileEngine() so o.engine.Modules() is not rescanned for every (namespace, input) pair.
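The suggested precomputation can be sketched as a one-time walk that builds a namespace-to-rule-names index. This is a hedged illustration: the `module` struct and `isFailureOrWarning` filter below are simplified stand-ins for OPA's real `ast.Module` and the evaluator's actual regex-based checks.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// module is a simplified stand-in for an OPA ast.Module:
// a package path plus the rule names it declares.
type module struct {
	pkg   string
	rules []string
}

// isFailureOrWarning approximates the deny/warn/violation filtering
// the evaluator applies when discovering reportable rules.
func isFailureOrWarning(name string) bool {
	return strings.HasPrefix(name, "deny") ||
		strings.HasPrefix(name, "warn") ||
		strings.HasPrefix(name, "violation")
}

// buildRuleIndex walks the compiled modules once and returns
// namespace -> deduped, sorted rule names, so per-input queries
// can do a map lookup instead of rescanning the AST.
func buildRuleIndex(mods []module) map[string][]string {
	seen := map[string]map[string]bool{}
	for _, m := range mods {
		ns := strings.TrimPrefix(m.pkg, "data.")
		if seen[ns] == nil {
			seen[ns] = map[string]bool{}
		}
		for _, r := range m.rules {
			if isFailureOrWarning(r) {
				seen[ns][r] = true
			}
		}
	}
	idx := map[string][]string{}
	for ns, set := range seen {
		for r := range set {
			idx[ns] = append(idx[ns], r)
		}
		sort.Strings(idx[ns])
	}
	return idx
}

func main() {
	mods := []module{
		{pkg: "data.release", rules: []string{"deny", "warn", "helper"}},
		{pkg: "data.release", rules: []string{"deny"}}, // duplicate across files
	}
	fmt.Println(buildRuleIndex(mods)["release"])
}
```

Building the index right after compilation keeps queryNamespace a pure lookup, which is the point of the review comment.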
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@internal/evaluator/base_evaluator.go`:
- Around line 332-333: computeSuccesses is calling FilterResults with time.Now()
and discarding its returned missingIncludes; change those calls to pass the
evaluator's effective time (use b.effectiveTime or the evaluator's effectiveTime
field) instead of time.Now(), and capture/propagate the missingIncludes return
value so result.Successes is computed with the same effective time and the
missingIncludes set is not lost; apply the same fix to the other FilterResults
call in the same file (the block around the alternate call at ~406-412) so both
successes and the other filter use the evaluator effective time and preserve
missingIncludes.
- Around line 307-325: The loop always constructs
NewUnifiedPostEvaluationFilter, which overrides any custom filter set by
NewConftestEvaluatorWithPostEvaluationFilter; change the code to use the
evaluator's configured post-evaluation filter (e.g., b.postEvaluationFilter)
when present, falling back to NewUnifiedPostEvaluationFilter(b.policyResolver)
only if b.postEvaluationFilter is nil, and then call FilterResults and
CategorizeResults on that chosen filter instance so custom filters are preserved
(apply this to the variables used around FilterResults/CategorizeResults).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: 46c4d173-fd5a-48f1-94ce-2a52bf4b8246
📒 Files selected for processing (9)
- cmd/validate/image.go
- internal/evaluation_target/application_snapshot_image/application_snapshot_image.go
- internal/evaluator/base_evaluator.go
- internal/evaluator/conftest_evaluator.go
- internal/evaluator/conftest_evaluator_unit_data_test.go
- internal/evaluator/evaluator.go
- internal/evaluator/opa_evaluator.go
- internal/evaluator/opa_evaluator_test.go
- internal/image/validate.go
for _, result := range runResults {
    unifiedFilter := NewUnifiedPostEvaluationFilter(b.policyResolver)

    allResults := []Result{}
    allResults = append(allResults, result.Warnings...)
    allResults = append(allResults, result.Failures...)
    allResults = append(allResults, result.Exceptions...)
    allResults = append(allResults, result.Skipped...)

    for j := range allResults {
        addRuleMetadata(ctx, &allResults[j], b.rules)
    }

    filteredResults, updatedMissingIncludes := unifiedFilter.FilterResults(
        allResults, b.allRules, target.Target, missingIncludes, effectiveTime)
    missingIncludes = updatedMissingIncludes

    warnings, failures, exceptions, skipped := unifiedFilter.CategorizeResults(
        filteredResults, result, effectiveTime)
Preserve caller-supplied post-evaluation filters.
postProcessResults now always creates NewUnifiedPostEvaluationFilter(...), so NewConftestEvaluatorWithPostEvaluationFilter no longer changes behavior after this refactor. That is a functional regression for any path relying on a custom filter.
Suggested fix
-func (b *basePolicyEvaluator) postProcessResults(ctx context.Context, runResults []Outcome, target EvaluationTarget) ([]Outcome, error) {
+func (b *basePolicyEvaluator) postProcessResults(ctx context.Context, runResults []Outcome, target EvaluationTarget, filter PostEvaluationFilter) ([]Outcome, error) {
 ...
 	for _, result := range runResults {
-		unifiedFilter := NewUnifiedPostEvaluationFilter(b.policyResolver)
+		unifiedFilter := filter
+		if unifiedFilter == nil {
+			unifiedFilter = NewUnifiedPostEvaluationFilter(b.policyResolver)
+		}
 ...
 	}

-return c.postProcessResults(ctx, runResults, target)
+return c.postProcessResults(ctx, runResults, target, c.postEvaluationFilter)
Codecov Report
❌ Patch coverage is
Flags with carried forward coverage won't be shown.
... and 1 file with indirect coverage changes
Force-pushed: 8dbf508 to adbdb0a
Add a new opaEvaluator that evaluates policies directly via OPA's rego API instead of going through conftest's runner. This eliminates the conftest runner overhead while reusing conftest's Engine for policy compilation and data loading.

Extract shared infrastructure into basePolicyEvaluator to eliminate ~400 lines of duplication between the two evaluators. Both now share policy download/inspection, data directory preparation, capabilities management, post-processing, and success computation.

Key changes:
- New opaEvaluator with direct rego.Eval() queries matching conftest's engine.Check() semantics (same regexes, exception handling, success counting)
- basePolicyEvaluator embedded struct with 12 shared methods
- sync.Once lazy initialization on both evaluators to prevent data races from concurrent worker goroutines
- BuildInput() on ApplicationSnapshotImage for in-memory OPA input delivery without disk I/O
- ParsedInput field on EvaluationTarget for passing pre-parsed input
- EC_USE_OPA=1 environment variable gate for selecting the OPA evaluator

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
BuildInput was missing ComponentName and PolicySpec fields that WriteInputFile includes, causing acceptance test snapshot failures. Also fix gci import ordering in fallback.go and tidy acceptance/go.mod. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
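Keeping the in-memory and on-disk input paths field-for-field identical is easier when both derive from a single marshal. A hedged sketch of that shape; `buildInput` and `writeInputFile` below are illustrative stand-ins, not the project's real BuildInput/WriteInputFile signatures:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// snapshot is a hypothetical input type standing in for the
// application snapshot payload.
type snapshot struct {
	Components []string `json:"components"`
}

// buildInput marshals the snapshot once and returns both the raw JSON
// (for file output) and the parsed map (for in-memory OPA input).
func buildInput(s snapshot) ([]byte, map[string]any, error) {
	raw, err := json.Marshal(s)
	if err != nil {
		return nil, nil, err
	}
	var parsed map[string]any
	if err := json.Unmarshal(raw, &parsed); err != nil {
		return nil, nil, err
	}
	return raw, parsed, nil
}

// writeInputFile reuses the already-marshaled bytes instead of
// re-marshaling, so the two paths cannot drift apart.
func writeInputFile(path string, raw []byte) error {
	return os.WriteFile(path, raw, 0o600)
}

func main() {
	raw, parsed, err := buildInput(snapshot{Components: []string{"app"}})
	if err != nil {
		panic(err)
	}
	if err := writeInputFile("input.json", raw); err != nil {
		panic(err)
	}
	defer os.Remove("input.json")
	fmt.Println(len(parsed) == 1)
}
```

With one builder, a field like ComponentName or PolicySpec can only be forgotten in one place, not two.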
Add unit tests covering evalOPAQuery, queryNamespace, evaluateWithEngine, helper functions (isOPAFailure, isOPAWarning, stripRulePrefix), input parsing, and base evaluator methods (prepareDataDirs, computeSuccesses, postProcessResults, createDataDirectory, createCapabilitiesFile, initWorkDir, initPolicyResolver, resolveFilteredNamespaces, isResultIncluded). Also fix gci import ordering in base_evaluator.go and opa_evaluator.go. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Full end-to-end integration tests for the OPA evaluator covering:
- Basic creation and capabilities path
- Evaluation with file-based and parsed input
- Deny/warn semantics (both triggered, warn only, all pass)
- Component-name-based VolatileConfig exclude filtering

Mirrors the existing conftest evaluator integration tests. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Run both evaluators against the same policy and input, assert identical outcomes (failure/warning/success codes and messages). Covers deny, warn, conditional rules, multiple rules, parsed vs file input, and component-name-based VolatileConfig filtering. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Inject EC_USE_OPA from the test runner's process environment into every acceptance scenario's ec binary invocation, enabling `EC_USE_OPA=1 make acceptance` to run the full suite with the OPA evaluator. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add validate_image_opa.feature (7 scenarios) and validate_input_opa.feature (2 scenarios) that exercise the OPA evaluator end-to-end via EC_USE_OPA=1. Covers happy day, rejection, multiple sources, rule filtering, future deny conversion, volatile config, and input validation paths. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Generated by running the OPA acceptance test scenarios locally. These snapshots enable CI to validate OPA evaluator output matches expected results. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cmd/validate/image.go (1)
320-334: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win: Misleading debug log when OPA evaluator fails to initialize.
The error path logs "Failed to initialize the conftest evaluator!" regardless of which evaluator branch was taken. With the new OPA path this will be confusing during debugging, so use a branch-aware message (or include the evaluator type).
Suggested fix

 if err != nil {
-	log.Debug("Failed to initialize the conftest evaluator!")
+	log.Debugf("Failed to initialize evaluator (opa=%v): %v", utils.IsOpaEnabled(), err)
 	return err
 }

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@cmd/validate/image.go` around lines 320 - 334, The debug message is ambiguous when evaluator initialization fails; update the error logging in the block that calls newOPAEvaluator and evaluator.NewConftestEvaluatorWithFilterType so the log includes which evaluator failed (e.g., "Failed to initialize OPA evaluator" when utils.IsOpaEnabled() branch is taken, and "Failed to initialize Conftest evaluator with filter" for the other branch) or include the evaluator type in the single message; modify the log.Debug call that follows the calls to newOPAEvaluator and evaluator.NewConftestEvaluatorWithFilterType to reflect the active evaluator (reference newOPAEvaluator, evaluator.NewConftestEvaluatorWithFilterType, and utils.IsOpaEnabled()) so debugging shows the correct evaluator type.
🧹 Nitpick comments (3)
internal/evaluator/evaluator_comparison_test.go (2)
130-140: ⚡ Quick win: Local variable `os` shadows the `os` package import.
Inside `assertSameOutcomes`, `os := summarizeOutcomes(opaResults)` shadows the imported `os` package. It's harmless today (no `os.*` use in this function) but is confusing and a footgun for future edits. Rename the local to e.g. `oo` or `opaSummary`.
Suggested fix

-func assertSameOutcomes(t *testing.T, label string, conftestResults, opaResults []Outcome) {
-	t.Helper()
-	cs := summarizeOutcomes(conftestResults)
-	os := summarizeOutcomes(opaResults)
-
-	assert.Equal(t, cs.failureCodes, os.failureCodes, "%s: failure codes differ", label)
-	assert.Equal(t, cs.warningCodes, os.warningCodes, "%s: warning codes differ", label)
-	assert.Equal(t, cs.successCodes, os.successCodes, "%s: success codes differ", label)
-	assert.Equal(t, cs.failureMsgs, os.failureMsgs, "%s: failure messages differ", label)
-	assert.Equal(t, cs.warningMsgs, os.warningMsgs, "%s: warning messages differ", label)
-}
+func assertSameOutcomes(t *testing.T, label string, conftestResults, opaResults []Outcome) {
+	t.Helper()
+	cs := summarizeOutcomes(conftestResults)
+	oo := summarizeOutcomes(opaResults)
+
+	assert.Equal(t, cs.failureCodes, oo.failureCodes, "%s: failure codes differ", label)
+	assert.Equal(t, cs.warningCodes, oo.warningCodes, "%s: warning codes differ", label)
+	assert.Equal(t, cs.successCodes, oo.successCodes, "%s: success codes differ", label)
+	assert.Equal(t, cs.failureMsgs, oo.failureMsgs, "%s: failure messages differ", label)
+	assert.Equal(t, cs.warningMsgs, oo.warningMsgs, "%s: warning messages differ", label)
+}

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/evaluator/evaluator_comparison_test.go` around lines 130 - 140, The local variable os in assertSameOutcomes shadows the imported os package; rename it (e.g., opaSummary or oo) and update all subsequent uses (the comparisons of failureCodes, warningCodes, successCodes, failureMsgs, warningMsgs) to reference the new variable; locate the variable created by calling summarizeOutcomes(opaResults) inside assertSameOutcomes and change that identifier and its uses so the os package import is no longer shadowed.
130-140: ⚡ Quick win: Exception messages are summarized but never compared.
`summarizeOutcomes` populates `exceptionMsgs`, but `assertSameOutcomes` does not assert on it, so any divergence in exception handling between conftest and OPA goes unnoticed. Either drop the field or include it in the equality checks to actually catch regressions.
Suggested fix

 assert.Equal(t, cs.failureMsgs, os.failureMsgs, "%s: failure messages differ", label)
 assert.Equal(t, cs.warningMsgs, os.warningMsgs, "%s: warning messages differ", label)
+assert.Equal(t, cs.exceptionMsgs, os.exceptionMsgs, "%s: exception messages differ", label)
 }

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/evaluator/evaluator_comparison_test.go` around lines 130-140, The test helper assertSameOutcomes omits comparing the exceptionMsgs field produced by summarizeOutcomes, so add an assertion comparing cs.exceptionMsgs and os.exceptionMsgs (e.g., using assert.Equal with the same "%s: exception messages differ" label) to catch divergences between conftest and OPA; alternatively, if exceptionMsgs is intentionally irrelevant, remove/populate it consistently in summarizeOutcomes so it no longer exists as a source of silent differences. Ensure references to summarizeOutcomes, assertSameOutcomes, and the exceptionMsgs field are updated accordingly.

features/validate_image_opa.feature (1)
110-119: 💤 Low value: Inline JSON `--policy` argument is fragile.
Unlike the other scenarios that reference a named `ec-policy`, this one inlines JSON directly on the command line. The acceptance harness splits args by literal space (`strings.Split(args, " ")`), so this only works because the JSON has no spaces; any future formatting (e.g. adding a space after `:` or `,`) will silently break the scenario. Consider switching to a named policy configuration like the other scenarios for consistency and robustness.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@features/validate_image_opa.feature` around lines 110 - 119, The scenario in features/validate_image_opa.feature inlines JSON into the --policy argument which is fragile due to the test harness splitting args on spaces; replace the inline JSON with a named ec-policy like the other scenarios: add a "Given an ec-policy named \"future-deny\" with" block that contains the same git source configuration (e.g. the git::https://... future-deny-policy repo), and update the ec command to use the policy variable reference (e.g. --policy ${future-deny_POLICY}) instead of the inline JSON so the harness passes the policy reliably.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: 51611af9-0a4c-434d-9ddf-f896de61fbc2
⛔ Files ignored due to path filters (2)
- features/__snapshots__/validate_image_opa.snap is excluded by !**/*.snap
- features/__snapshots__/validate_input_opa.snap is excluded by !**/*.snap
📒 Files selected for processing (15)
- acceptance/cli/cli.go
- acceptance/go.mod
- cmd/validate/image.go
- features/validate_image_opa.feature
- features/validate_input_opa.feature
- internal/evaluation_target/application_snapshot_image/application_snapshot_image.go
- internal/evaluator/base_evaluator.go
- internal/evaluator/conftest_evaluator_unit_data_test.go
- internal/evaluator/evaluator.go
- internal/evaluator/evaluator_comparison_test.go
- internal/evaluator/opa_evaluator.go
- internal/evaluator/opa_evaluator_integration_test.go
- internal/evaluator/opa_evaluator_test.go
- internal/image/validate.go
- internal/validate/vsa/fallback.go
💤 Files with no reviewable changes (1)
- internal/evaluator/conftest_evaluator_unit_data_test.go
✅ Files skipped from review due to trivial changes (2)
- acceptance/go.mod
- internal/evaluation_target/application_snapshot_image/application_snapshot_image.go
🚧 Files skipped from review as they are similar to previous changes (5)
- internal/evaluator/evaluator.go
- internal/image/validate.go
- internal/evaluator/opa_evaluator_test.go
- internal/evaluator/opa_evaluator.go
- internal/evaluator/base_evaluator.go
Success filtering was using time.Now() instead of the evaluator's effective time, causing inconsistent behavior for backdated runs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@internal/evaluator/conftest_evaluator.go`:
- Around line 861-863: FilterResults is returning an updated missingIncludes map
but the call at filteredResults, _ :=
unifiedFilter.FilterResults([]Result{success}, rules, imageRef, componentName,
missingIncludes, effectiveTime) discards it; update the call to capture and
reassign the returned missingIncludes (e.g., filteredResults, missingIncludes =
unifiedFilter.FilterResults(...)) so later logic sees include criteria satisfied
by success results; ensure you propagate that reassigned missingIncludes
variable wherever it is subsequently used.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Enterprise
Run ID: a7e82c60-3bb0-4fc1-8970-15b8b361e652
📒 Files selected for processing (4)
- internal/evaluator/base_evaluator.go
- internal/evaluator/conftest_evaluator.go
- internal/evaluator/conftest_evaluator_unit_filtering_test.go
- internal/evaluator/opa_evaluator_test.go
✅ Files skipped from review due to trivial changes (1)
- internal/evaluator/base_evaluator.go
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/evaluator/opa_evaluator_test.go
/retest